Data Wrangling with Pandas

Now that we have been exposed to the basic functionality of Pandas, let's explore some more advanced features that will be useful when addressing more complex data management tasks.

As most statisticians/data analysts will admit, often the lion's share of the time spent implementing an analysis is devoted to preparing the data itself, rather than to coding or running a particular model that uses the data. This is where Pandas and Python's standard library are beneficial, providing high-level, flexible, and efficient tools for manipulating your data as needed.


In [1]:
%matplotlib inline
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
sns.set_context('notebook')

Date/Time data handling

Date and time data are inherently problematic. There are an unequal number of days in every month, an unequal number of days in a year (due to leap years), and time zones that vary over space. Yet information about time is essential in many analyses, particularly in the case of time series analysis.

The datetime built-in library handles temporal information down to the microsecond.


In [2]:
from datetime import datetime

In [3]:
now = datetime.now()
now


Out[3]:
datetime.datetime(2016, 10, 3, 14, 50, 56, 656895)

In [4]:
now.day


Out[4]:
3

In [5]:
now.weekday()


Out[5]:
0

In addition to datetime, there are simpler objects that hold only date or only time information, respectively.


In [6]:
from datetime import date, time

In [7]:
time(3, 24)


Out[7]:
datetime.time(3, 24)

In [8]:
date(1970, 9, 3)


Out[8]:
datetime.date(1970, 9, 3)

Having a custom data type for dates and times is convenient because we can perform operations on them easily. For example, we may want to calculate the difference between two times:


In [9]:
my_age = now - datetime(1970, 1, 1)
my_age


Out[9]:
datetime.timedelta(17077, 53456, 656895)

In [10]:
print(type(my_age))
my_age.days/365


<class 'datetime.timedelta'>
Out[10]:
46.78630136986301

In this section, we will manipulate data collected from ocean-going vessels on the eastern seaboard. Vessel operations are monitored using the Automatic Identification System (AIS), a safety at sea navigation technology which vessels are required to maintain and that uses transponders to transmit very high frequency (VHF) radio signals containing static information including ship name, call sign, and country of origin, as well as dynamic information unique to a particular voyage such as vessel location, heading, and speed.

The International Maritime Organization’s (IMO) International Convention for the Safety of Life at Sea requires functioning AIS capabilities on all vessels 300 gross tons or greater and the US Coast Guard requires AIS on nearly all vessels sailing in U.S. waters. The Coast Guard has established a national network of AIS receivers that provides coverage of nearly all U.S. waters. AIS signals are transmitted several times each minute and the network is capable of handling thousands of reports per minute and updates as often as every two seconds. Therefore, a typical voyage in our study might include the transmission of hundreds or thousands of AIS encoded signals. This provides a rich source of data that includes both spatial and temporal information.

For our purposes, we will use summarized data that describes the transit of a given vessel through a particular administrative area. The data includes the start and end time of the transit segment, as well as information about the speed of the vessel, how far it travelled, etc.


In [11]:
segments = pd.read_csv("Data/AIS/transit_segments.csv")
segments.head()


Out[11]:
mmsi name transit segment seg_length avg_sog min_sog max_sog pdgt10 st_time end_time
0 1 Us Govt Ves 1 1 5.1 13.2 9.2 14.5 96.5 2/10/09 16:03 2/10/09 16:27
1 1 Dredge Capt Frank 1 1 13.5 18.6 10.4 20.6 100.0 4/6/09 14:31 4/6/09 15:20
2 1 Us Gov Vessel 1 1 4.3 16.2 10.3 20.5 100.0 4/6/09 14:36 4/6/09 14:55
3 1 Us Gov Vessel 2 1 9.2 15.4 14.5 16.1 100.0 4/10/09 17:58 4/10/09 18:34
4 1 Dredge Capt Frank 2 1 9.2 15.4 14.6 16.2 100.0 4/10/09 17:59 4/10/09 18:35

For example, we might be interested in the distribution of transit lengths, so we can plot them as a histogram:


In [12]:
segments.seg_length.hist(bins=500)


Out[12]:
<matplotlib.axes._subplots.AxesSubplot at 0x11ab88908>

Though most of the transits appear to be short, there are a few longer distances that make the plot difficult to read. This is where a transformation is useful:


In [13]:
segments.seg_length.apply(np.log).hist(bins=500)


Out[13]:
<matplotlib.axes._subplots.AxesSubplot at 0x1165579e8>

We can see that although there are date/time fields in the dataset, they are not in any specialized format, such as datetime.


In [14]:
segments.st_time.dtype


Out[14]:
dtype('O')

Our first order of business will be to convert these data to datetime. The strptime method parses a string representation of a date and/or time field, according to the expected format of this information.


In [15]:
datetime.strptime(segments.st_time.loc[0], '%m/%d/%y %H:%M')


Out[15]:
datetime.datetime(2009, 2, 10, 16, 3)

The dateutil package includes a parser that attempts to detect the format of the date strings, and convert them automatically.


In [16]:
from dateutil.parser import parse

In [17]:
parse(segments.st_time.loc[0])


Out[17]:
datetime.datetime(2009, 2, 10, 16, 3)

We can convert all the dates in a particular column by using the apply method.


In [18]:
segments.st_time.apply(lambda d: datetime.strptime(d, '%m/%d/%y %H:%M'))


Out[18]:
0        2009-02-10 16:03:00
1        2009-04-06 14:31:00
2        2009-04-06 14:36:00
3        2009-04-10 17:58:00
4        2009-04-10 17:59:00
5        2010-03-20 16:06:00
6        2010-03-20 18:05:00
7        2011-05-04 11:28:00
8        2010-06-05 11:23:00
9        2010-06-08 11:03:00
10       2010-06-14 17:07:00
11       2010-06-17 16:35:00
12       2010-07-11 11:51:00
13       2010-07-12 15:33:00
14       2010-07-20 19:23:00
15       2010-09-10 14:00:00
16       2011-03-14 16:13:00
17       2011-03-18 11:18:00
18       2011-04-25 16:37:00
19       2011-05-14 15:51:00
20       2011-05-19 12:34:00
21       2011-05-23 21:29:00
22       2011-05-25 11:46:00
23       2011-06-01 14:32:00
24       2011-06-07 14:43:00
25       2011-08-08 15:42:00
26       2011-09-06 20:59:00
27       2011-09-08 08:22:00
28       2011-09-12 14:56:00
29       2011-10-25 12:52:00
                 ...        
262496   2009-11-22 02:13:00
262497   2009-11-22 02:15:00
262498   2009-12-02 01:46:00
262499   2009-12-02 03:12:00
262500   2009-12-06 16:09:00
262501   2009-12-06 16:21:00
262502   2009-12-11 23:49:00
262503   2009-12-15 22:23:00
262504   2009-12-15 22:51:00
262505   2009-12-31 10:17:00
262506   2010-01-01 11:22:00
262507   2010-01-13 17:40:00
262508   2010-01-13 18:07:00
262509   2010-01-24 05:53:00
262510   2010-03-20 05:14:00
262511   2010-04-14 15:09:00
262512   2010-04-19 00:20:00
262513   2010-05-13 14:13:00
262514   2010-05-19 19:46:00
262515   2010-05-31 14:27:00
262516   2010-06-05 05:25:00
262517   2010-06-27 02:35:00
262518   2010-07-01 03:49:00
262519   2010-07-02 03:30:00
262520   2010-06-13 10:32:00
262521   2010-06-15 12:49:00
262522   2010-06-15 21:32:00
262523   2010-06-17 19:16:00
262524   2010-06-18 02:52:00
262525   2010-06-18 10:19:00
Name: st_time, dtype: datetime64[ns]

As a convenience, Pandas has a to_datetime function that will parse and convert an entire Series of formatted strings into datetime objects.


In [19]:
pd.to_datetime(segments.st_time[:10])


Out[19]:
0   2009-02-10 16:03:00
1   2009-04-06 14:31:00
2   2009-04-06 14:36:00
3   2009-04-10 17:58:00
4   2009-04-10 17:59:00
5   2010-03-20 16:06:00
6   2010-03-20 18:05:00
7   2011-05-04 11:28:00
8   2010-06-05 11:23:00
9   2010-06-08 11:03:00
Name: st_time, dtype: datetime64[ns]

Pandas also has a custom NA value for missing datetime objects, NaT.


In [20]:
pd.to_datetime([None])


Out[20]:
DatetimeIndex(['NaT'], dtype='datetime64[ns]', freq=None)
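
As with other missing values, NaT is recognized by Pandas' null-detection functions; a quick illustrative check:

pd.isnull(pd.to_datetime([None]))   # array([ True])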

Also, if to_datetime() has problems parsing a particular date/time format, you can pass the format specification explicitly using the format= argument.
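
For example, a sketch that supplies the format used by these data explicitly (the same '%m/%d/%y %H:%M' pattern as above); this avoids guessing and is typically faster:

pd.to_datetime(segments.st_time[:10], format='%m/%d/%y %H:%M')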

The read_* functions now have an optional parse_dates argument that tries to convert any columns passed to it into datetime format upon import:


In [21]:
segments = pd.read_csv("Data/AIS/transit_segments.csv", parse_dates=['st_time', 'end_time'])

In [22]:
segments.dtypes


Out[22]:
mmsi                   int64
name                  object
transit                int64
segment                int64
seg_length           float64
avg_sog              float64
min_sog              float64
max_sog              float64
pdgt10               float64
st_time       datetime64[ns]
end_time      datetime64[ns]
dtype: object

Columns of the datetime type have an accessor to easily extract properties of the data type. This will return a Series, with the same row index as the DataFrame. For example:


In [23]:
segments.st_time.dt.month.head()


Out[23]:
0    2
1    4
2    4
3    4
4    4
Name: st_time, dtype: int64

In [24]:
segments.st_time.dt.hour.head()


Out[24]:
0    16
1    14
2    14
3    17
4    17
Name: st_time, dtype: int64

This can be used to easily filter rows by particular temporal attributes:


In [25]:
segments[segments.st_time.dt.month==2].head()


Out[25]:
mmsi name transit segment seg_length avg_sog min_sog max_sog pdgt10 st_time end_time
0 1 Us Govt Ves 1 1 5.1 13.2 9.2 14.5 96.5 2009-02-10 16:03:00 2009-02-10 16:27:00
78 3011 Charleston 16 1 18.9 0.3 0.0 16.1 0.5 2010-02-07 07:26:00 2010-02-09 19:27:00
79 3011 Charleston 17 1 19.2 0.3 0.0 6.4 0.0 2010-02-11 16:56:00 2010-02-14 14:31:00
80 3011 Charleston 18 1 24.7 0.3 0.0 5.7 0.0 2010-02-19 11:53:00 2010-02-22 16:50:00
81 3011 Charleston 19 1 40.1 0.4 0.0 16.3 0.1 2010-02-23 15:15:00 2010-03-02 14:25:00

In addition, time zone information can be applied:


In [26]:
segments.st_time.dt.tz_localize('UTC').head()


Out[26]:
0   2009-02-10 16:03:00+00:00
1   2009-04-06 14:31:00+00:00
2   2009-04-06 14:36:00+00:00
3   2009-04-10 17:58:00+00:00
4   2009-04-10 17:59:00+00:00
Name: st_time, dtype: datetime64[ns, UTC]

In [27]:
segments.st_time.dt.tz_localize('UTC').dt.tz_convert('US/Eastern').head()


Out[27]:
0   2009-02-10 11:03:00-05:00
1   2009-04-06 10:31:00-04:00
2   2009-04-06 10:36:00-04:00
3   2009-04-10 13:58:00-04:00
4   2009-04-10 13:59:00-04:00
Name: st_time, dtype: datetime64[ns, US/Eastern]

Merging and joining DataFrame objects

Now that we have the vessel transit information as we need it, we may want a little more information regarding the vessels themselves. In the Data/AIS folder there is a second table that contains information about each of the ships that traveled the segments in the segments table.


In [28]:
vessels = pd.read_csv("Data/AIS/vessel_information.csv", index_col='mmsi')
vessels.head()


Out[28]:
num_names names sov flag flag_type num_loas loa max_loa num_types type
mmsi
1 8 Bil Holman Dredge/Dredge Capt Frank/Emo/Offsho... Y Unknown Unknown 7 42.0/48.0/57.0/90.0/138.0/154.0/156.0 156.0 4 Dredging/MilOps/Reserved/Towing
9 3 000000009/Raven/Shearwater N Unknown Unknown 2 50.0/62.0 62.0 2 Pleasure/Tug
21 1 Us Gov Vessel Y Unknown Unknown 1 208.0 208.0 1 Unknown
74 2 Mcfaul/Sarah Bell N Unknown Unknown 1 155.0 155.0 1 Unknown
103 3 Ron G/Us Navy Warship 103/Us Warship 103 Y Unknown Unknown 2 26.0/155.0 155.0 2 Tanker/Unknown

In [29]:
[v for v in vessels.type.unique() if v.find('/')==-1]


Out[29]:
['Unknown',
 'Other',
 'Tug',
 'Towing',
 'Pleasure',
 'Cargo',
 'WIG',
 'Fishing',
 'BigTow',
 'MilOps',
 'Tanker',
 'Passenger',
 'SAR',
 'Sailing',
 'Reserved',
 'Law',
 'Dredging',
 'AntiPol',
 'Pilot',
 'HSC',
 'Diving',
 'Resol-18',
 'Tender',
 'Spare',
 'Medical']

In [30]:
vessels.type.value_counts()


Out[30]:
Cargo                                                                                                   5622
Tanker                                                                                                  2440
Pleasure                                                                                                 601
Tug                                                                                                      221
Sailing                                                                                                  205
Fishing                                                                                                  200
Other                                                                                                    178
Passenger                                                                                                150
Towing                                                                                                   117
Unknown                                                                                                  106
MilOps                                                                                                    82
Cargo/Unknown                                                                                             79
Cargo/Other                                                                                               63
Cargo/Tanker                                                                                              42
Towing/Tug                                                                                                30
Tanker/Unknown                                                                                            30
BigTow                                                                                                    29
Fishing/Pleasure                                                                                          24
Dredging                                                                                                  24
Reserved                                                                                                  20
Cargo/Reserved                                                                                            19
BigTow/Towing                                                                                             19
Other/Tanker                                                                                              16
Tug/Unknown                                                                                               15
Cargo/MilOps                                                                                              15
BigTow/Tug                                                                                                15
HSC                                                                                                       14
SAR                                                                                                       13
Other/Pleasure                                                                                            11
BigTow/Towing/Tug                                                                                         11
                                                                                                        ... 
Fishing/Towing/Tug                                                                                         1
AntiPol/Other                                                                                              1
Cargo/Dredging                                                                                             1
Fishing/Pilot/Unknown                                                                                      1
Other/Spare/Towing/Tug/WIG                                                                                 1
AntiPol/Fishing/Pleasure                                                                                   1
MilOps/Pleasure/Unknown                                                                                    1
BigTow/Reserved/Towing/Tug                                                                                 1
Reserved/Towing/Tug                                                                                        1
Other/WIG                                                                                                  1
BigTow/Cargo/Dredging/Reserved                                                                             1
BigTow/Reserved/Towing/Tug/WIG                                                                             1
Dredging/Other/Unknown                                                                                     1
Medical                                                                                                    1
Other/Reserved/Tug                                                                                         1
Cargo/Fishing/MilOps/Other/Pleasure/Sailing/Spare/Tender/Towing                                            1
Diving/Other                                                                                               1
BigTow/Fishing/Tug                                                                                         1
SAR/Tanker                                                                                                 1
Law/MilOps/Reserved/SAR/Tanker                                                                             1
Passenger/Pleasure/Sailing                                                                                 1
Cargo/Towing                                                                                               1
Cargo/Tanker/Tug                                                                                           1
BigTow/Cargo/Unknown                                                                                       1
SAR/Tug                                                                                                    1
BigTow/Cargo/Diving/Dredging/Fishing/MilOps/Other/Passenger/Pilot/Pleasure/Sailing/Tanker/Towing/Tug       1
Pleasure/Tender                                                                                            1
BigTow/Other                                                                                               1
BigTow/Towing/WIG                                                                                          1
Sailing/Unknown                                                                                            1
Name: type, dtype: int64

The challenge, however, is that several ships have travelled multiple segments, so there is not a one-to-one relationship between the rows of the two tables. The table of vessel information has a one-to-many relationship with the segments.

In Pandas, we can combine tables according to the value of one or more keys that are used to identify rows, much like an index. Using a trivial example:


In [31]:
df1 = pd.DataFrame(dict(id=range(4), age=np.random.randint(18, 31, size=4)))
df2 = pd.DataFrame(dict(id=list(range(3))+list(range(3)), 
                        score=np.random.random(size=6)))

df1


Out[31]:
age id
0 22 0
1 29 1
2 21 2
3 26 3

In [32]:
df2


Out[32]:
id score
0 0 0.762985
1 1 0.970899
2 2 0.840192
3 0 0.559040
4 1 0.004083
5 2 0.935314

In [33]:
pd.merge(df1, df2)


Out[33]:
age id score
0 22 0 0.762985
1 22 0 0.559040
2 29 1 0.970899
3 29 1 0.004083
4 21 2 0.840192
5 21 2 0.935314

Notice that without any information about which column to use as a key, Pandas did the right thing and used the id column in both tables. Unless specified otherwise, merge will use any common column names as keys for merging the tables.

Notice also that id=3 from df1 was omitted from the merged table. This is because, by default, merge performs an inner join on the tables, meaning that the merged table represents an intersection of the two tables.


In [34]:
pd.merge(df1, df2, how='outer')


Out[34]:
age id score
0 22 0 0.762985
1 22 0 0.559040
2 29 1 0.970899
3 29 1 0.004083
4 21 2 0.840192
5 21 2 0.935314
6 26 3 NaN

The outer join above yields the union of the two tables, so all rows are represented, with missing values inserted as appropriate. One can also perform left and right joins to include all rows of the left or right table (i.e. the first or second argument to merge, respectively), but not necessarily the rows of the other.
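
For instance, a minimal sketch of a left join on the toy tables above; every row of df1 is kept, and id=3 receives a missing score because it has no match in df2:

pd.merge(df1, df2, how='left')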

Looking at the two datasets that we wish to merge:


In [35]:
segments.head(1)


Out[35]:
mmsi name transit segment seg_length avg_sog min_sog max_sog pdgt10 st_time end_time
0 1 Us Govt Ves 1 1 5.1 13.2 9.2 14.5 96.5 2009-02-10 16:03:00 2009-02-10 16:27:00

In [36]:
vessels.head(1)


Out[36]:
num_names names sov flag flag_type num_loas loa max_loa num_types type
mmsi
1 8 Bil Holman Dredge/Dredge Capt Frank/Emo/Offsho... Y Unknown Unknown 7 42.0/48.0/57.0/90.0/138.0/154.0/156.0 156.0 4 Dredging/MilOps/Reserved/Towing

we see that there is a mmsi value (a vessel identifier) in each table, but it is used as an index for the vessels table. In this case, we have to specify to join on the index for this table, and on the mmsi column for the other.


In [37]:
segments_merged = pd.merge(vessels, segments, left_index=True, right_on='mmsi')

In [38]:
segments_merged.head()


Out[38]:
num_names names sov flag flag_type num_loas loa max_loa num_types type ... name transit segment seg_length avg_sog min_sog max_sog pdgt10 st_time end_time
0 8 Bil Holman Dredge/Dredge Capt Frank/Emo/Offsho... Y Unknown Unknown 7 42.0/48.0/57.0/90.0/138.0/154.0/156.0 156.0 4 Dredging/MilOps/Reserved/Towing ... Us Govt Ves 1 1 5.1 13.2 9.2 14.5 96.5 2009-02-10 16:03:00 2009-02-10 16:27:00
1 8 Bil Holman Dredge/Dredge Capt Frank/Emo/Offsho... Y Unknown Unknown 7 42.0/48.0/57.0/90.0/138.0/154.0/156.0 156.0 4 Dredging/MilOps/Reserved/Towing ... Dredge Capt Frank 1 1 13.5 18.6 10.4 20.6 100.0 2009-04-06 14:31:00 2009-04-06 15:20:00
2 8 Bil Holman Dredge/Dredge Capt Frank/Emo/Offsho... Y Unknown Unknown 7 42.0/48.0/57.0/90.0/138.0/154.0/156.0 156.0 4 Dredging/MilOps/Reserved/Towing ... Us Gov Vessel 1 1 4.3 16.2 10.3 20.5 100.0 2009-04-06 14:36:00 2009-04-06 14:55:00
3 8 Bil Holman Dredge/Dredge Capt Frank/Emo/Offsho... Y Unknown Unknown 7 42.0/48.0/57.0/90.0/138.0/154.0/156.0 156.0 4 Dredging/MilOps/Reserved/Towing ... Us Gov Vessel 2 1 9.2 15.4 14.5 16.1 100.0 2009-04-10 17:58:00 2009-04-10 18:34:00
4 8 Bil Holman Dredge/Dredge Capt Frank/Emo/Offsho... Y Unknown Unknown 7 42.0/48.0/57.0/90.0/138.0/154.0/156.0 156.0 4 Dredging/MilOps/Reserved/Towing ... Dredge Capt Frank 2 1 9.2 15.4 14.6 16.2 100.0 2009-04-10 17:59:00 2009-04-10 18:35:00

5 rows × 21 columns

In this case, the default inner join is suitable; we are not interested in observations from either table that do not have corresponding entries in the other.

Notice that the mmsi field, which was the index of the vessels table, is no longer an index on the merged table.
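
If we prefer to have mmsi as the index again, we can restore it on the merged result (a quick sketch):

segments_merged.set_index('mmsi').head()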

Here, we used the merge function to perform the merge; we could also have used the merge method for either of the tables:


In [39]:
vessels.merge(segments, left_index=True, right_on='mmsi').head()


Out[39]:
num_names names sov flag flag_type num_loas loa max_loa num_types type ... name transit segment seg_length avg_sog min_sog max_sog pdgt10 st_time end_time
0 8 Bil Holman Dredge/Dredge Capt Frank/Emo/Offsho... Y Unknown Unknown 7 42.0/48.0/57.0/90.0/138.0/154.0/156.0 156.0 4 Dredging/MilOps/Reserved/Towing ... Us Govt Ves 1 1 5.1 13.2 9.2 14.5 96.5 2009-02-10 16:03:00 2009-02-10 16:27:00
1 8 Bil Holman Dredge/Dredge Capt Frank/Emo/Offsho... Y Unknown Unknown 7 42.0/48.0/57.0/90.0/138.0/154.0/156.0 156.0 4 Dredging/MilOps/Reserved/Towing ... Dredge Capt Frank 1 1 13.5 18.6 10.4 20.6 100.0 2009-04-06 14:31:00 2009-04-06 15:20:00
2 8 Bil Holman Dredge/Dredge Capt Frank/Emo/Offsho... Y Unknown Unknown 7 42.0/48.0/57.0/90.0/138.0/154.0/156.0 156.0 4 Dredging/MilOps/Reserved/Towing ... Us Gov Vessel 1 1 4.3 16.2 10.3 20.5 100.0 2009-04-06 14:36:00 2009-04-06 14:55:00
3 8 Bil Holman Dredge/Dredge Capt Frank/Emo/Offsho... Y Unknown Unknown 7 42.0/48.0/57.0/90.0/138.0/154.0/156.0 156.0 4 Dredging/MilOps/Reserved/Towing ... Us Gov Vessel 2 1 9.2 15.4 14.5 16.1 100.0 2009-04-10 17:58:00 2009-04-10 18:34:00
4 8 Bil Holman Dredge/Dredge Capt Frank/Emo/Offsho... Y Unknown Unknown 7 42.0/48.0/57.0/90.0/138.0/154.0/156.0 156.0 4 Dredging/MilOps/Reserved/Towing ... Dredge Capt Frank 2 1 9.2 15.4 14.6 16.2 100.0 2009-04-10 17:59:00 2009-04-10 18:35:00

5 rows × 21 columns

Occasionally, there will be fields with the same name in both tables that we do not wish to use as join keys; they may contain different information, despite sharing a name. In this case, Pandas will by default append the suffixes _x and _y to such columns to uniquely identify them.


In [40]:
segments['type'] = 'foo'
pd.merge(vessels, segments, left_index=True, right_on='mmsi').head()


Out[40]:
num_names names sov flag flag_type num_loas loa max_loa num_types type_x ... transit segment seg_length avg_sog min_sog max_sog pdgt10 st_time end_time type_y
0 8 Bil Holman Dredge/Dredge Capt Frank/Emo/Offsho... Y Unknown Unknown 7 42.0/48.0/57.0/90.0/138.0/154.0/156.0 156.0 4 Dredging/MilOps/Reserved/Towing ... 1 1 5.1 13.2 9.2 14.5 96.5 2009-02-10 16:03:00 2009-02-10 16:27:00 foo
1 8 Bil Holman Dredge/Dredge Capt Frank/Emo/Offsho... Y Unknown Unknown 7 42.0/48.0/57.0/90.0/138.0/154.0/156.0 156.0 4 Dredging/MilOps/Reserved/Towing ... 1 1 13.5 18.6 10.4 20.6 100.0 2009-04-06 14:31:00 2009-04-06 15:20:00 foo
2 8 Bil Holman Dredge/Dredge Capt Frank/Emo/Offsho... Y Unknown Unknown 7 42.0/48.0/57.0/90.0/138.0/154.0/156.0 156.0 4 Dredging/MilOps/Reserved/Towing ... 1 1 4.3 16.2 10.3 20.5 100.0 2009-04-06 14:36:00 2009-04-06 14:55:00 foo
3 8 Bil Holman Dredge/Dredge Capt Frank/Emo/Offsho... Y Unknown Unknown 7 42.0/48.0/57.0/90.0/138.0/154.0/156.0 156.0 4 Dredging/MilOps/Reserved/Towing ... 2 1 9.2 15.4 14.5 16.1 100.0 2009-04-10 17:58:00 2009-04-10 18:34:00 foo
4 8 Bil Holman Dredge/Dredge Capt Frank/Emo/Offsho... Y Unknown Unknown 7 42.0/48.0/57.0/90.0/138.0/154.0/156.0 156.0 4 Dredging/MilOps/Reserved/Towing ... 2 1 9.2 15.4 14.6 16.2 100.0 2009-04-10 17:59:00 2009-04-10 18:35:00 foo

5 rows × 22 columns

This behavior can be overridden by specifying a suffixes argument, containing a list of the suffixes to be used for the columns of the left and right tables, respectively.
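
For example, a sketch that labels the overlapping type columns more descriptively (the suffix names here are arbitrary):

pd.merge(vessels, segments, left_index=True, right_on='mmsi',
         suffixes=('_vessel', '_seg')).columns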

Concatenation

A common data manipulation is appending rows or columns to a dataset that already conform to the dimensions of the existing rows or columns, respectively. In NumPy, this is done either with concatenate or the convenience "functions" c_ and r_:


In [41]:
np.concatenate([np.random.random(5), np.random.random(5)])


Out[41]:
array([ 0.3037552 ,  0.0835749 ,  0.00445532,  0.42221901,  0.89411269,
        0.68678631,  0.16345956,  0.63054661,  0.49350511,  0.3624152 ])

In [42]:
np.r_[np.random.random(5), np.random.random(5)]


Out[42]:
array([ 0.22211729,  0.4979635 ,  0.95492184,  0.64020983,  0.0069188 ,
        0.85127543,  0.48181569,  0.84692906,  0.56313583,  0.32771225])

In [43]:
np.c_[np.random.random(5), np.random.random(5)]


Out[43]:
array([[ 0.84325214,  0.12362461],
       [ 0.41088105,  0.29898623],
       [ 0.11299879,  0.74754527],
       [ 0.98714898,  0.46669562],
       [ 0.9958373 ,  0.27416671]])

Notice that c_ and r_ are not really functions at all: they are used with indexing syntax (square brackets) rather than being called. They are actually class instances, but they behave mostly like functions here. Don't think about this too hard; just know that they are there.

This operation is also called binding or stacking.

With Pandas' indexed data structures, there are additional considerations, as the overlap in index values between two data structures affects how they are concatenated.

Let's import two microbiome datasets, each consisting of counts of microorganisms from a particular patient. We will use the first column of each dataset as the index.


In [44]:
mb1 = pd.read_excel('Data/microbiome/MID1.xls', 'Sheet 1', index_col=0, header=None)
mb2 = pd.read_excel('Data/microbiome/MID2.xls', 'Sheet 1', index_col=0, header=None)
mb1.shape, mb2.shape


Out[44]:
((272, 1), (288, 1))

In [45]:
mb1.head()


Out[45]:
1
0
Archaea "Crenarchaeota" Thermoprotei Desulfurococcales Desulfurococcaceae Ignisphaera 7
Archaea "Crenarchaeota" Thermoprotei Desulfurococcales Pyrodictiaceae Pyrolobus 2
Archaea "Crenarchaeota" Thermoprotei Sulfolobales Sulfolobaceae Stygiolobus 3
Archaea "Crenarchaeota" Thermoprotei Thermoproteales Thermofilaceae Thermofilum 3
Archaea "Euryarchaeota" "Methanomicrobia" Methanocellales Methanocellaceae Methanocella 7

Let's give the index and columns meaningful labels:


In [46]:
mb1.columns = mb2.columns = ['Count']

In [47]:
mb1.index.name = mb2.index.name = 'Taxon'

In [48]:
mb1.head()


Out[48]:
Count
Taxon
Archaea "Crenarchaeota" Thermoprotei Desulfurococcales Desulfurococcaceae Ignisphaera 7
Archaea "Crenarchaeota" Thermoprotei Desulfurococcales Pyrodictiaceae Pyrolobus 2
Archaea "Crenarchaeota" Thermoprotei Sulfolobales Sulfolobaceae Stygiolobus 3
Archaea "Crenarchaeota" Thermoprotei Thermoproteales Thermofilaceae Thermofilum 3
Archaea "Euryarchaeota" "Methanomicrobia" Methanocellales Methanocellaceae Methanocella 7

The index of these data is the unique biological classification of each organism, beginning with domain, phylum, class, and for some organisms, going all the way down to the genus level.


In [49]:
mb1.index[:3]


Out[49]:
Index(['Archaea "Crenarchaeota" Thermoprotei Desulfurococcales Desulfurococcaceae Ignisphaera',
       'Archaea "Crenarchaeota" Thermoprotei Desulfurococcales Pyrodictiaceae Pyrolobus',
       'Archaea "Crenarchaeota" Thermoprotei Sulfolobales Sulfolobaceae Stygiolobus'],
      dtype='object', name='Taxon')

In [50]:
mb1.index.is_unique


Out[50]:
True

If we concatenate along axis=0 (the default), we will obtain another data frame with the rows concatenated:


In [51]:
pd.concat([mb1, mb2], axis=0).shape


Out[51]:
(560, 1)

However, the index is no longer unique, due to overlap between the two DataFrames.


In [52]:
pd.concat([mb1, mb2], axis=0).index.is_unique


Out[52]:
False

Concatenating along axis=1 will concatenate column-wise, but respecting the indices of the two DataFrames.


In [53]:
pd.concat([mb1, mb2], axis=1).shape


Out[53]:
(438, 2)

In [54]:
pd.concat([mb1, mb2], axis=1).head()


Out[54]:
Count Count
Archaea "Crenarchaeota" Thermoprotei Acidilobales Acidilobaceae Acidilobus NaN 2.0
Archaea "Crenarchaeota" Thermoprotei Acidilobales Caldisphaeraceae Caldisphaera NaN 14.0
Archaea "Crenarchaeota" Thermoprotei Desulfurococcales Desulfurococcaceae Ignisphaera 7.0 23.0
Archaea "Crenarchaeota" Thermoprotei Desulfurococcales Desulfurococcaceae Sulfophobococcus NaN 1.0
Archaea "Crenarchaeota" Thermoprotei Desulfurococcales Desulfurococcaceae Thermosphaera NaN 2.0

If we are only interested in taxa that are included in both DataFrames, we can specify a join='inner' argument.


In [55]:
pd.concat([mb1, mb2], axis=1, join='inner').head()


Out[55]:
Count Count
Taxon
Archaea "Crenarchaeota" Thermoprotei Desulfurococcales Desulfurococcaceae Ignisphaera 7 23
Archaea "Crenarchaeota" Thermoprotei Desulfurococcales Pyrodictiaceae Pyrolobus 2 2
Archaea "Crenarchaeota" Thermoprotei Sulfolobales Sulfolobaceae Stygiolobus 3 10
Archaea "Crenarchaeota" Thermoprotei Thermoproteales Thermofilaceae Thermofilum 3 9
Archaea "Euryarchaeota" "Methanomicrobia" Methanocellales Methanocellaceae Methanocella 7 9

If we wanted to use the second table to fill values absent from the first table, we could use combine_first.


In [56]:
mb1.combine_first(mb2).head()


Out[56]:
Count
Taxon
Archaea "Crenarchaeota" Thermoprotei Acidilobales Acidilobaceae Acidilobus 2.0
Archaea "Crenarchaeota" Thermoprotei Acidilobales Caldisphaeraceae Caldisphaera 14.0
Archaea "Crenarchaeota" Thermoprotei Desulfurococcales Desulfurococcaceae Ignisphaera 7.0
Archaea "Crenarchaeota" Thermoprotei Desulfurococcales Desulfurococcaceae Sulfophobococcus 1.0
Archaea "Crenarchaeota" Thermoprotei Desulfurococcales Desulfurococcaceae Thermosphaera 2.0

We can also create a hierarchical index based on keys identifying the original tables.


In [57]:
pd.concat([mb1, mb2], keys=['patient1', 'patient2']).head()


Out[57]:
Count
Taxon
patient1 Archaea "Crenarchaeota" Thermoprotei Desulfurococcales Desulfurococcaceae Ignisphaera 7
Archaea "Crenarchaeota" Thermoprotei Desulfurococcales Pyrodictiaceae Pyrolobus 2
Archaea "Crenarchaeota" Thermoprotei Sulfolobales Sulfolobaceae Stygiolobus 3
Archaea "Crenarchaeota" Thermoprotei Thermoproteales Thermofilaceae Thermofilum 3
Archaea "Euryarchaeota" "Methanomicrobia" Methanocellales Methanocellaceae Methanocella 7

In [58]:
pd.concat([mb1, mb2], keys=['patient1', 'patient2']).index.is_unique


Out[58]:
True

Alternatively, you can pass keys to the concatenation by supplying the DataFrames (or Series) as a dict, resulting in a "wide" format table.


In [59]:
pd.concat(dict(patient1=mb1, patient2=mb2), axis=1).head()


Out[59]:
patient1 patient2
Count Count
Archaea "Crenarchaeota" Thermoprotei Acidilobales Acidilobaceae Acidilobus NaN 2.0
Archaea "Crenarchaeota" Thermoprotei Acidilobales Caldisphaeraceae Caldisphaera NaN 14.0
Archaea "Crenarchaeota" Thermoprotei Desulfurococcales Desulfurococcaceae Ignisphaera 7.0 23.0
Archaea "Crenarchaeota" Thermoprotei Desulfurococcales Desulfurococcaceae Sulfophobococcus NaN 1.0
Archaea "Crenarchaeota" Thermoprotei Desulfurococcales Desulfurococcaceae Thermosphaera NaN 2.0

If you want concat to work like numpy.concatenate, you may provide the ignore_index=True argument.
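
A minimal sketch: with ignore_index=True the taxon labels are discarded and the result gets a fresh integer index, just as with NumPy concatenation:

pd.concat([mb1, mb2], ignore_index=True).head()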

Exercise 1

In the Data/microbiome subdirectory, there are 9 spreadsheets of microbiome data that were acquired from high-throughput RNA sequencing procedures, along with a 10th file that describes the content of each. Write code that imports each of the data spreadsheets and combines them into a single DataFrame, adding the identifying information from the metadata spreadsheet as columns in the combined DataFrame.


In [60]:
!ls Data/microbiome/


MID1.xls     MID3.xls     MID5.xls     MID7.xls     MID9.xls
MID2.xls     MID4.xls     MID6.xls     MID8.xls     metadata.xls

Solution

For each MID* file we add the three pieces of information contained in the metadata file: BARCODE, GROUP, and SAMPLE.

The function read_file takes as parameters a file path, metadata (the row of the metadata table corresponding to that file), and the list of column headers from the metadata table.

It then reads the file and adds the new columns based on the corresponding metadata.

Finally, it returns the DataFrame.


In [193]:
def read_file(file, metadata, headerMetadata):
    # read one MID* spreadsheet, using the taxon as the index
    d = pd.read_excel(file, 'Sheet 1', index_col=0, header=None)
    # metadata is an (index, row) pair from DataFrame.iterrows();
    # metadata[1] is the row itself (the BARCODE, GROUP, SAMPLE values)
    for (header, value) in zip(headerMetadata, metadata[1]):
        d[header] = value
    return d

load_data() reads the metadata file and each MID* file (using read_file), and concatenates all of them together.


In [196]:
import glob  # to find all files in the folder

def load_data():
    metadata = pd.read_excel('Data/microbiome/metadata.xls', 'Sheet1')
    headerMetadata = metadata.columns.values.tolist()
    # sort so that MID1.xls ... MID9.xls line up with the metadata rows
    files = sorted(glob.glob('Data/microbiome/MID*.xls'))
    data = pd.concat(read_file(file, meta, headerMetadata)
                     for (file, meta) in zip(files, metadata.iterrows()))
    return data

loaded = load_data()

In [197]:
metadata
loaded


Out[197]:
1 BARCODE GROUP SAMPLE
0
Archaea "Crenarchaeota" Thermoprotei Desulfurococcales Desulfurococcaceae Ignisphaera 7 MID1 EXTRACTION CONTROL NaN
Archaea "Crenarchaeota" Thermoprotei Desulfurococcales Pyrodictiaceae Pyrolobus 2 MID1 EXTRACTION CONTROL NaN
Archaea "Crenarchaeota" Thermoprotei Sulfolobales Sulfolobaceae Stygiolobus 3 MID1 EXTRACTION CONTROL NaN
Archaea "Crenarchaeota" Thermoprotei Thermoproteales Thermofilaceae Thermofilum 3 MID1 EXTRACTION CONTROL NaN
Archaea "Euryarchaeota" "Methanomicrobia" Methanocellales Methanocellaceae Methanocella 7 MID1 EXTRACTION CONTROL NaN
Archaea "Euryarchaeota" "Methanomicrobia" Methanosarcinales Methanosarcinaceae Methanimicrococcus 1 MID1 EXTRACTION CONTROL NaN
Archaea "Euryarchaeota" "Methanomicrobia" Methanosarcinales Methermicoccaceae Methermicoccus 1 MID1 EXTRACTION CONTROL NaN
Archaea "Euryarchaeota" Archaeoglobi Archaeoglobales Archaeoglobaceae Ferroglobus 1 MID1 EXTRACTION CONTROL NaN
Archaea "Euryarchaeota" Archaeoglobi Archaeoglobales Archaeoglobaceae Geoglobus 1 MID1 EXTRACTION CONTROL NaN
Archaea "Euryarchaeota" Halobacteria Halobacteriales Halobacteriaceae Haloplanus 1 MID1 EXTRACTION CONTROL NaN
Archaea "Euryarchaeota" Halobacteria Halobacteriales Halobacteriaceae Haloquadratum 4 MID1 EXTRACTION CONTROL NaN
Archaea "Euryarchaeota" Halobacteria Halobacteriales Halobacteriaceae Halosimplex 1 MID1 EXTRACTION CONTROL NaN
Archaea "Euryarchaeota" Halobacteria Halobacteriales Halobacteriaceae Natronobacterium 2 MID1 EXTRACTION CONTROL NaN
Archaea "Euryarchaeota" Halobacteria Halobacteriales Halobacteriaceae Natronomonas 4 MID1 EXTRACTION CONTROL NaN
Archaea "Euryarchaeota" Methanococci Methanococcales Methanocaldococcaceae Methanotorris 1 MID1 EXTRACTION CONTROL NaN
Archaea "Euryarchaeota" Methanopyri Methanopyrales Methanopyraceae Methanopyrus 12 MID1 EXTRACTION CONTROL NaN
Archaea "Euryarchaeota" Thermoplasmata Thermoplasmatales Thermoplasmatales_incertae_sedis Thermogymnomonas 2 MID1 EXTRACTION CONTROL NaN
Bacteria "Actinobacteria" Actinobacteria Actinomycetales Acidothermaceae Acidothermus 2 MID1 EXTRACTION CONTROL NaN
Bacteria "Actinobacteria" Actinobacteria Actinomycetales Beutenbergiaceae Salana 1 MID1 EXTRACTION CONTROL NaN
Bacteria "Actinobacteria" Actinobacteria Actinomycetales Bogoriellaceae Bogoriella 1 MID1 EXTRACTION CONTROL NaN
Bacteria "Actinobacteria" Actinobacteria Actinomycetales Cellulomonadaceae Tropheryma 1 MID1 EXTRACTION CONTROL NaN
Bacteria "Actinobacteria" Actinobacteria Actinomycetales Corynebacteriaceae Turicella 2 MID1 EXTRACTION CONTROL NaN
Bacteria "Actinobacteria" Actinobacteria Actinomycetales Cryptosporangiaceae Cryptosporangium 53 MID1 EXTRACTION CONTROL NaN
Bacteria "Actinobacteria" Actinobacteria Actinomycetales Dermabacteraceae Dermabacter 1 MID1 EXTRACTION CONTROL NaN
Bacteria "Actinobacteria" Actinobacteria Actinomycetales Dermabacteraceae Devriesea 2 MID1 EXTRACTION CONTROL NaN
Bacteria "Actinobacteria" Actinobacteria Actinomycetales Dermatophilaceae Kineosphaera 3 MID1 EXTRACTION CONTROL NaN
Bacteria "Actinobacteria" Actinobacteria Actinomycetales Frankineae_incertae_sedis Fodinicola 1 MID1 EXTRACTION CONTROL NaN
Bacteria "Actinobacteria" Actinobacteria Actinomycetales Geodermatophilaceae Blastococcus 99 MID1 EXTRACTION CONTROL NaN
Bacteria "Actinobacteria" Actinobacteria Actinomycetales Geodermatophilaceae Geodermatophilus 17 MID1 EXTRACTION CONTROL NaN
Bacteria "Actinobacteria" Actinobacteria Actinomycetales Intrasporangiaceae Kribbia 2 MID1 EXTRACTION CONTROL NaN
... ... ... ... ...
Bacteria "Proteobacteria" Gammaproteobacteria Oceanospirillales Oceanospirillaceae Neptuniibacter 10 MID9 Control 2 stool
Bacteria "Proteobacteria" Gammaproteobacteria Oceanospirillales Oceanospirillales_incertae_sedis Spongiispira 1 MID9 Control 2 stool
Bacteria "Proteobacteria" Gammaproteobacteria Oceanospirillales Oleiphilaceae Oleiphilus 1 MID9 Control 2 stool
Bacteria "Proteobacteria" Gammaproteobacteria Pasteurellales Pasteurellaceae Haemophilus 1 MID9 Control 2 stool
Bacteria "Proteobacteria" Gammaproteobacteria Pseudomonadales Moraxellaceae Alkanindiges 12 MID9 Control 2 stool
Bacteria "Proteobacteria" Gammaproteobacteria Pseudomonadales Moraxellaceae Perlucidibaca 4 MID9 Control 2 stool
Bacteria "Proteobacteria" Gammaproteobacteria Pseudomonadales Pseudomonadaceae Azomonas 805 MID9 Control 2 stool
Bacteria "Proteobacteria" Gammaproteobacteria Pseudomonadales Pseudomonadaceae Azorhizophilus 862 MID9 Control 2 stool
Bacteria "Proteobacteria" Gammaproteobacteria Pseudomonadales Pseudomonadaceae Azotobacter 12 MID9 Control 2 stool
Bacteria "Proteobacteria" Gammaproteobacteria Pseudomonadales Pseudomonadaceae Pseudomonas 3534 MID9 Control 2 stool
Bacteria "Proteobacteria" Gammaproteobacteria Thiotrichales Piscirickettsiaceae Cycloclasticus 4 MID9 Control 2 stool
Bacteria "Proteobacteria" Gammaproteobacteria Thiotrichales Piscirickettsiaceae Sulfurivirga 1 MID9 Control 2 stool
Bacteria "Proteobacteria" Gammaproteobacteria Thiotrichales Thiotrichaceae Beggiatoa 25 MID9 Control 2 stool
Bacteria "Proteobacteria" Gammaproteobacteria Thiotrichales Thiotrichales_incertae_sedis Fangia 2 MID9 Control 2 stool
Bacteria "Proteobacteria" Gammaproteobacteria Xanthomonadales Sinobacteraceae Alkanibacter 3 MID9 Control 2 stool
Bacteria "Proteobacteria" Gammaproteobacteria Xanthomonadales Sinobacteraceae Sinobacter 2 MID9 Control 2 stool
Bacteria "Proteobacteria" Gammaproteobacteria Xanthomonadales Xanthomonadaceae Aquimonas 1 MID9 Control 2 stool
Bacteria "Proteobacteria" Gammaproteobacteria Xanthomonadales Xanthomonadaceae Frateuria 3 MID9 Control 2 stool
Bacteria "Proteobacteria" Gammaproteobacteria Xanthomonadales Xanthomonadaceae Ignatzschineria 2 MID9 Control 2 stool
Bacteria "Proteobacteria" Gammaproteobacteria Xanthomonadales Xanthomonadaceae Luteibacter 1 MID9 Control 2 stool
Bacteria "Synergistetes" Synergistia Synergistales Synergistaceae Aminomonas 3 MID9 Control 2 stool
Bacteria "Tenericutes" Mollicutes Anaeroplasmatales Anaeroplasmataceae Asteroleplasma 1 MID9 Control 2 stool
Bacteria "Tenericutes" Mollicutes Haloplasmatales Haloplasmataceae Haloplasma 11 MID9 Control 2 stool
Bacteria "Thermodesulfobacteria" Thermodesulfobacteria Thermodesulfobacteriales Thermodesulfobacteriaceae Caldimicrobium 1 MID9 Control 2 stool
Bacteria "Thermodesulfobacteria" Thermodesulfobacteria Thermodesulfobacteriales Thermodesulfobacteriaceae Thermodesulfatator 1 MID9 Control 2 stool
Bacteria "Thermotogae" Thermotogae Thermotogales Thermotogaceae Geotoga 43 MID9 Control 2 stool
Bacteria "Thermotogae" Thermotogae Thermotogales Thermotogaceae Kosmotoga 16 MID9 Control 2 stool
Bacteria "Verrucomicrobia" Opitutae Opitutales Opitutaceae Opitutus 1 MID9 Control 2 stool
Bacteria Cyanobacteria Cyanobacteria Chloroplast Chlorarachniophyceae 1 MID9 Control 2 stool
Bacteria Cyanobacteria Cyanobacteria Chloroplast Streptophyta 10 MID9 Control 2 stool

2396 rows × 4 columns


In [80]:
metadata


Out[80]:
BARCODE GROUP SAMPLE
0 MID1 EXTRACTION CONTROL NaN
1 MID2 NEC 1 tissue
2 MID3 Control 1 tissue
3 MID4 NEC 2 tissue
4 MID5 Control 2 tissue
5 MID6 NEC 1 stool
6 MID7 Control 1 stool
7 MID8 NEC 2 stool
8 MID9 Control 2 stool

Reshaping DataFrame objects

In the context of a single DataFrame, we are often interested in re-arranging the layout of our data.

This dataset is from Table 6.9 of Statistical Methods for the Analysis of Repeated Measurements by Charles S. Davis, pp. 161-163 (Springer, 2002). These data are from a multicenter, randomized controlled trial of botulinum toxin type B (BotB) in patients with cervical dystonia from nine U.S. sites.

  • Randomized to placebo (N=36), 5000 units of BotB (N=36), 10,000 units of BotB (N=37)
  • Response variable: total score on Toronto Western Spasmodic Torticollis Rating Scale (TWSTRS), measuring severity, pain, and disability of cervical dystonia (high scores mean more impairment)
  • TWSTRS measured at baseline (week 0) and weeks 2, 4, 8, 12, 16 after treatment began

In [ ]:
cdystonia = pd.read_csv("Data/cdystonia.csv", index_col=None)
cdystonia.head()

This dataset includes repeated measurements of the same individuals (longitudinal data). It's possible to present such information in (at least) two ways: showing each repeated measurement in its own row, or using multiple columns to represent the repeated measurements.

The stack method rotates the data frame so that columns are represented in rows:


In [ ]:
stacked = cdystonia.stack()
stacked

To complement this, unstack pivots from rows back to columns.


In [ ]:
stacked.unstack().head()

For this dataset, it makes sense to create a hierarchical index based on the patient and observation:


In [ ]:
cdystonia2 = cdystonia.set_index(['patient','obs'])
cdystonia2.head()

In [ ]:
cdystonia2.index.is_unique

If we want to transform this data so that repeated measurements are in columns, we can unstack the twstrs measurements according to obs.


In [ ]:
twstrs_wide = cdystonia2['twstrs'].unstack('obs')
twstrs_wide.head()

In [ ]:
cdystonia_wide = (cdystonia[['patient','site','id','treat','age','sex']]
                  .drop_duplicates()
                  .merge(twstrs_wide, right_index=True, left_on='patient', how='inner')
                  .head())
cdystonia_wide

A slightly cleaner way of doing this is to set the patient-level information as an index before unstacking:


In [ ]:
(cdystonia.set_index(['patient','site','id','treat','age','sex','week'])['twstrs']
     .unstack('week').head())

To convert our "wide" format back to long, we can use the melt function, appropriately parameterized. This function is useful for DataFrames where one or more columns are identifier variables (id_vars), with the remaining columns being measured variables (value_vars). The measured variables are "unpivoted" to the row axis, leaving just two non-identifier columns, a variable and its corresponding value, which can both be renamed using optional arguments.


In [ ]:
pd.melt(cdystonia_wide, id_vars=['patient','site','id','treat','age','sex'], 
        var_name='obs', value_name='twsters').head()

This illustrates the two formats for longitudinal data: long and wide formats. It's typically better to store data in long format, because additional data can be included as additional rows in the database, while wide format requires that the entire database schema be altered by adding columns to every row as data are collected.

The preferable format for analysis depends entirely on what is planned for the data, so it is important to be able to move easily between them.

Pivoting

The pivot method allows a DataFrame to be transformed easily between long and wide formats in the same way as a pivot table is created in a spreadsheet. It takes three arguments: index, columns and values, corresponding to the DataFrame index (the row headers), columns and cell values, respectively.

For example, we may want the twstrs variable (the response variable) in wide format according to patient, as we saw with the unstacking method above:


In [ ]:
cdystonia.pivot(index='patient', columns='obs', values='twstrs').head()

If we omit the values argument, we get a DataFrame with hierarchical columns, just as when we applied unstack to the hierarchically-indexed table:


In [ ]:
cdystonia.pivot(index='patient', columns='obs')

A related method, pivot_table, creates a spreadsheet-like table with a hierarchical index, and allows the values of the table to be populated using an arbitrary aggregation function.


In [ ]:
cdystonia.pivot_table(index=['site', 'treat'], columns='week', values='twstrs', 
                      aggfunc=max).head(20)

For a simple cross-tabulation of group frequencies, the crosstab function (not a method) aggregates counts of data according to factors in rows and columns. The factors may be hierarchical if desired.


In [ ]:
pd.crosstab(cdystonia.sex, cdystonia.site)
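
For example, a sketch of a hierarchical cross-tabulation, passing a list of factors for the rows (sex nested within treatment group is one arbitrary choice):

pd.crosstab([cdystonia.treat, cdystonia.sex], cdystonia.site)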

Data transformation

There are a slew of additional operations for DataFrames that we would collectively refer to as "transformations" which include tasks such as removing duplicate values, replacing values, and grouping values.

Dealing with duplicates

We can easily identify and remove duplicate values from DataFrame objects. For example, say we want to remove ships from our vessels dataset that share the same name:


In [ ]:
vessels.duplicated(subset='names')

In [ ]:
vessels.drop_duplicates(['names'])

Value replacement

Frequently, we get data columns that are encoded as strings that we wish to represent numerically for the purposes of including them in a quantitative analysis. For example, consider the treatment variable in the cervical dystonia dataset:


In [ ]:
cdystonia.treat.value_counts()

A logical way to specify these numerically is to change them to integer values, perhaps using "Placebo" as a baseline value. If we create a dict with the original values as keys and the replacements as values, we can pass it to the map method to implement the changes.


In [ ]:
treatment_map = {'Placebo': 0, '5000U': 1, '10000U': 2}

In [ ]:
cdystonia['treatment'] = cdystonia.treat.map(treatment_map)
cdystonia.treatment

Alternately, if we simply want to replace particular values in a Series or DataFrame, we can use the replace method.

An example where replacement is useful is dealing with zeros in certain transformations. For example, if we try to take the log of a set of values:


In [ ]:
vals = pd.Series([float(i)**10 for i in range(10)])
vals

In [ ]:
np.log(vals)

In such situations, we can replace the zero with a value so small that it makes no difference to the ensuing analysis. We can do this with replace.


In [ ]:
vals = vals.replace(0, 1e-6)
np.log(vals)

We can also perform the same replacement that we used map for with replace:


In [ ]:
cdystonia2.treat.replace({'Placebo': 0, '5000U': 1, '10000U': 2})

Indicator variables

For some statistical analyses (e.g. regression models or analyses of variance), categorical or group variables need to be converted into columns of indicators--zeros and ones--to create a so-called design matrix. The Pandas function get_dummies (indicator variables are also known as dummy variables) makes this transformation straightforward.

Let's consider the DataFrame containing the ships corresponding to the transit segments on the eastern seaboard. The type variable denotes the class of vessel; we can create a matrix of indicators for this. For simplicity, let's restrict the data to the 5 most common types of ships:


In [ ]:
top5 = vessels.type.isin(vessels.type.value_counts().index[:5])
top5.head(10)

In [ ]:
vessels5 = vessels[top5]

In [ ]:
pd.get_dummies(vessels5.type).head(10)

Categorical Data

Pandas provides a convenient dtype for representing categorical (factor) data, called category.

For example, the treat column in the cervical dystonia dataset represents three treatment levels in a clinical trial, and is imported by default as an object type, since it consists of string values.


In [ ]:
cdystonia.treat.head()

We can convert this to a category type either by the Categorical constructor, or casting the column using astype:


In [ ]:
pd.Categorical(cdystonia.treat)

In [ ]:
cdystonia['treat'] = cdystonia.treat.astype('category')

In [ ]:
cdystonia.treat.describe()

By default the Categorical type represents an unordered categorical.


In [ ]:
cdystonia.treat.cat.categories

However, an ordering can be imposed. The order is lexical by default, but can be overridden by supplying the categories in the desired order.


In [ ]:
cdystonia['treat'] = cdystonia.treat.cat.reorder_categories(['Placebo', '5000U', '10000U'])

In [ ]:
cdystonia.treat.cat.as_ordered().head()

The important difference between the category type and the object type is that category is represented by an underlying array of integers, which is then mapped to character labels.


In [ ]:
cdystonia.treat.cat.codes

Notice that these are 8-bit integers, which are essentially single bytes of data, making memory usage lower.
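
A rough way to see the savings is to compare the memory footprint of the object and category representations (illustrative; exact numbers depend on the data):

print(cdystonia.treat.astype('object').memory_usage(deep=True))
print(cdystonia.treat.astype('category').memory_usage(deep=True))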

There is also a performance benefit. Consider an operation such as calculating the total segment lengths for each ship in the segments table (this is also a preview of pandas' groupby operation!):


In [ ]:
%time segments.groupby(segments.name).seg_length.sum().sort_values(ascending=False, inplace=False).head()

In [ ]:
segments['name'] = segments.name.astype('category')

In [ ]:
%time segments.groupby(segments.name).seg_length.sum().sort_values(ascending=False, inplace=False).head()

Hence, we get a considerable speedup simply by using the appropriate dtype for our data.

Discretization

Pandas' cut function can be used to group continuous or countable data into bins. Discretization is generally a very bad idea for statistical analysis, so use this function responsibly!

Let's say we want to bin the ages of the cervical dystonia patients into a smaller number of groups:


In [ ]:
cdystonia.age.describe()

Let's transform these data into decades, beginning with individuals in their 20s and ending with those in their 80s:


In [ ]:
pd.cut(cdystonia.age, [20,30,40,50,60,70,80,90])[:30]

A parenthesis indicates an open endpoint, which is not included in the interval, whereas a square bracket indicates a closed endpoint, which is included. We can switch the closure to the left side by setting the right flag to False:


In [ ]:
pd.cut(cdystonia.age, [20,30,40,50,60,70,80,90], right=False)[:30]

Since the data are now ordinal, rather than numeric, we can give them labels:


In [ ]:
pd.cut(cdystonia.age, [20,40,60,80,90], labels=['young','middle-aged','old','really old'])[:30]

A related function, qcut, uses empirical quantiles to divide the data. If, for example, we want the quartiles -- (0-25%], (25-50%], (50-75%], (75-100%] -- we can just specify 4 intervals, which will be interpreted as equally-spaced quantiles (so each bin contains approximately the same number of observations) by default:


In [ ]:
pd.qcut(cdystonia.age, 4)[:30]

Alternatively, one can specify custom quantiles to act as cut points:


In [ ]:
quantiles = pd.qcut(segments.seg_length, [0, 0.01, 0.05, 0.95, 0.99, 1])
quantiles[:30]

Note that you can easily combine discretization with the generation of indicator variables shown above:


In [ ]:
pd.get_dummies(quantiles).head(10)

Permutation and sampling

For some data analysis tasks, such as simulation, we need to be able to randomly reorder our data, or draw random values from it. Calling NumPy's permutation function with the length of the sequence you want to permute generates an array with a permuted sequence of integers, which can be used to re-order the sequence.


In [ ]:
new_order = np.random.permutation(len(segments))
new_order[:30]

Using this sequence as an argument to the take method results in a reordered DataFrame:


In [ ]:
segments.take(new_order).head()

Compare this ordering with the original:


In [ ]:
segments.head()

For random sampling, DataFrame and Series objects have a sample method that can be used to draw samples, with or without replacement:


In [ ]:
vessels.sample(n=10)

In [ ]:
vessels.sample(n=10, replace=True)

Data aggregation and GroupBy operations

One of the most powerful features of Pandas is its GroupBy functionality. On occasion we may want to perform operations on groups of observations within a dataset. For example:

  • aggregation, such as computing the sum or mean of each group, which involves applying a function to each group and returning the aggregated results
  • slicing the DataFrame into groups and then doing something with the resulting slices (e.g. plotting)
  • group-wise transformation, such as standardization/normalization

In [ ]:
cdystonia_grouped = cdystonia.groupby(cdystonia.patient)

This grouped dataset is hard to visualize


In [ ]:
cdystonia_grouped

However, the grouping is only an intermediate step; for example, we may want to iterate over each of the patient groups:


In [ ]:
for patient, group in cdystonia_grouped:
    print('patient', patient)
    print('group', group)

A common data analysis procedure is the split-apply-combine operation, which groups subsets of data together, applies a function to each of the groups, then recombines them into a new data table.

For example, we may want to aggregate our data with some function.

*(figure taken from "Python for Data Analysis", p.251)*

We can aggregate in Pandas using the aggregate (or agg, for short) method:


In [ ]:
cdystonia_grouped.agg(np.mean).head()

Notice that the treat and sex variables are not included in the aggregation. Since it does not make sense to average non-numeric (string) variables, these columns are simply ignored by the method.

Some aggregation functions are so common that Pandas has a convenience method for them, such as mean:


In [ ]:
cdystonia_grouped.mean().head()

The add_prefix and add_suffix methods can be used to give the columns of the resulting table labels that reflect the transformation:


In [ ]:
cdystonia_grouped.mean().add_suffix('_mean').head()

In [ ]:
# The median of the `twstrs` variable
cdystonia_grouped['twstrs'].quantile(0.5)

If we wish, we can easily aggregate according to multiple keys:


In [ ]:
cdystonia.groupby(['week','site']).mean().head()

Alternately, we can transform the data, using a function of our choice with the transform method:


In [ ]:
normalize = lambda x: (x - x.mean())/x.std()

cdystonia_grouped.transform(normalize).head()

It is easy to do column selection within groupby operations, if we are only interested in split-apply-combine operations on a subset of columns:


In [ ]:
cdystonia_grouped['twstrs'].mean().head()

In [ ]:
# This gives the same result as a DataFrame
cdystonia_grouped[['twstrs']].mean().head()

If you simply want to divide your DataFrame into chunks for later use, it's easy to convert them into a dict so that they can be easily indexed out as needed:


In [ ]:
chunks = dict(list(cdystonia_grouped))

In [ ]:
chunks[4]

By default, groupby groups by row, but we can specify the axis argument to change this. For example, we can group our columns by dtype this way:


In [ ]:
grouped_by_type = cdystonia.groupby(cdystonia.dtypes, axis=1)
{g:grouped_by_type.get_group(g) for g in grouped_by_type.groups}

It's also possible to group by one or more levels of a hierarchical index. Recall cdystonia2, which we created with a hierarchical index:


In [ ]:
cdystonia2.head(10)

In [ ]:
cdystonia2.groupby(level='obs', axis=0)['twstrs'].mean()

Apply

We can generalize the split-apply-combine methodology by using the apply method. This allows us to invoke any function we wish on a grouped dataset and recombine the results into a DataFrame.

The function below takes a DataFrame and a column name, sorts by the column, and takes the n largest values of that column. We can use this with apply to return the largest values from every group in a DataFrame in a single call.


In [ ]:
def top(df, column, n=5):
    return df.sort_values(by=column, ascending=False)[:n]

To see this in action, consider the vessel transit segments dataset (which we merged with the vessel information to yield segments_merged). Say we wanted to return the 3 longest segments travelled by each ship:


In [ ]:
top3segments = segments_merged.groupby('mmsi').apply(top, column='seg_length', n=3)[['names', 'seg_length']]
top3segments.head(15)

Notice that additional arguments for the applied function can be passed via apply after the function name. It assumes that the DataFrame is the first argument.

Recall the microbiome data sets that we used previously for the concatenation example. Suppose that we wish to aggregate the data at a higher biological classification than genus. For example, we can identify samples down to class, which is the 3rd level of organization in each index.


In [ ]:
mb1.index[:3]

Using the string methods split and join we can create an index that just uses the first three classifications: domain, phylum and class.


In [ ]:
class_index = mb1.index.map(lambda x: ' '.join(x.split(' ')[:3]))

In [ ]:
mb_class = mb1.copy()
mb_class.index = class_index

However, since there are multiple taxonomic units with the same class, our index is no longer unique:


In [ ]:
mb_class.head()

We can re-establish a unique index by summing all rows with the same class, using groupby:


In [ ]:
mb_class.groupby(level=0).sum().head(10)

Exercise 2

Load the dataset in titanic.xls. It contains data on all the passengers that travelled on the Titanic.


In [198]:
from IPython.core.display import HTML
HTML(filename='Data/titanic.html')


Out[198]:

Data frame: titanic3

1309 observations and 14 variables, maximum # NAs: 1188

Variables (name, label/units, storage, number of levels, # missing):

  pclass: integer, 3 levels, 0 NAs
  survived (Survived): double, 0 NAs
  name (Name): character, 0 NAs
  sex: integer, 2 levels, 0 NAs
  age (Age, in years): double, 263 NAs
  sibsp (Number of Siblings/Spouses Aboard): double, 0 NAs
  parch (Number of Parents/Children Aboard): double, 0 NAs
  ticket (Ticket Number): character, 0 NAs
  fare (Passenger Fare, in British Pounds): double, 1 NA
  cabin: integer, 187 levels, 0 NAs
  embarked: integer, 3 levels, 2 NAs
  boat: integer, 28 levels, 0 NAs
  body (Body Identification Number): double, 1188 NAs
  home.dest (Home/Destination): character, 0 NAs

Variable levels:

  pclass: 1st, 2nd, 3rd
  sex: female, male
  cabin: 187 distinct cabin codes
  embarked: Cherbourg, Queenstown, Southampton
  boat: 28 distinct boat identifiers

Women and children first?

  1. Describe each attribute, both with basic statistics and plots. State clearly your assumptions and discuss your findings.
  2. Use the groupby method to calculate the proportion of passengers that survived by sex.
  3. Calculate the same proportion, but by class and sex.
  4. Create age categories: children (under 14 years), adolescents (14-20), adult (21-64), and senior (65+), and calculate survival proportions by age category, class and sex.

Our solution is in a separate notebook: data-wrangling-last-exo.ipynb
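
As a starting point for item 2, here is a minimal sketch; it assumes the file lives at Data/titanic.xls and contains survived (0/1) and sex columns, so the group means are survival proportions:

titanic = pd.read_excel('Data/titanic.xls')   # assumed path; adjust as needed
titanic.groupby('sex')['survived'].mean()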

References

Python for Data Analysis, Wes McKinney (O'Reilly Media)